
Windows Movie Maker: video is playing but I can't save

It says:

"Windows Movie Maker cannot save the movie to the specified location. Verify that the original source files used in your movie are still available, and that there is enough free disk space available, then try again."

In the preview box the movie plays fine, but whenever I try to save it, that message pops up.

There's enough disk space; I don't know what to do.


Scrub on the last frame of a video

Hi,

I am trying to position within a video by means of a slider. What I want to do is scrub to the position indicated by the slider. What I have done so far is to set the play rate to 0 and call the Start function with the time position that corresponds to the slider. Everything works fine until I get to the end of the file. When the end is reached, the application doesn't respond in the same way.

I want it to stay on the last frame. What happens instead is that the screen either goes black or you can only see the background (depending on whether repaint-on-stop is enabled).

From my understanding, with Media Foundation, as soon as the end of the file is reached an MEEndOfPresentation event is fired and the source and transforms are stopped.

Is there a way to achieve what I want to do?
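
For reference, the approach described above (set the rate to 0, then Start at the slider position) can be sketched roughly like this; the `pSession` parameter and `ScrubTo` name are assumptions, and error handling is trimmed:

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: scrub an IMFMediaSession to a given position (100-ns units).
HRESULT ScrubTo(IMFMediaSession *pSession, LONGLONG hnsPosition)
{
    // Get the rate control service and set the playback rate to 0 (scrubbing).
    IMFRateControl *pRate = NULL;
    HRESULT hr = MFGetService(pSession, MF_RATE_CONTROL_SERVICE,
                              IID_PPV_ARGS(&pRate));
    if (SUCCEEDED(hr))
        hr = pRate->SetRate(FALSE, 0.0f);   // thin = FALSE, rate = 0

    // Start at the requested position; at rate 0 the session renders
    // a single frame there and holds it.
    if (SUCCEEDED(hr))
    {
        PROPVARIANT var;
        PropVariantInit(&var);
        var.vt = VT_I8;
        var.hVal.QuadPart = hnsPosition;
        hr = pSession->Start(&GUID_NULL, &var);
        PropVariantClear(&var);
    }
    if (pRate) pRate->Release();
    return hr;
}
```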

Cheers,

Alin 

Flushing msmpeg2vdec.dll internal buffers...

Is there a way to flush msmpeg2vdec.dll's internal buffers? We are trying to track down a possible memory leak with our H.264 media source, and it seems that the H.264 decoder has internal buffers that grow to rather large sizes. Is there a Media Foundation event that can be sent to periodically free this memory?

Thanks,

Jay

SinkWriter and MPEG2 TS

Is there anything specific that needs to be done to the SinkWriter to configure it to write an MPEG-2 Transport Stream?

In an effort to write a TS segmenter, I'm getting MF_E_INVALIDMEDIATYPE on WriteSample. The MFCopy sample exhibits the same behavior.

H.264/AAC compressed samples are coming from another TS file through the SourceReader. The end goal would be to locate the IDR boundaries and start a new file when one is detected. No transcoding, just a simple remux from TS to TS.
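
In case it helps, one hedged guess worth checking is whether the sink writer was ever asked for a TS container at all. A sketch of requesting one via the MF_TRANSCODE_CONTAINERTYPE attribute (the file name and function name are placeholders, and MPEG-2 TS output through the sink writer may require Windows 8 or later):

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Sketch: create a sink writer that targets an MPEG-2 container
// via the MF_TRANSCODE_CONTAINERTYPE attribute.
HRESULT CreateTsWriter(IMFSinkWriter **ppWriter)
{
    IMFAttributes *pAttr = NULL;
    HRESULT hr = MFCreateAttributes(&pAttr, 1);
    if (SUCCEEDED(hr))
        hr = pAttr->SetGUID(MF_TRANSCODE_CONTAINERTYPE,
                            MFTranscodeContainerType_MPEG2);
    if (SUCCEEDED(hr))
        hr = MFCreateSinkWriterFromURL(L"segment0.ts", NULL, pAttr, ppWriter);
    if (pAttr) pAttr->Release();
    return hr;
}
```

MF_E_INVALIDMEDIATYPE on WriteSample can also simply mean the stream's input media type was never accepted, so it is worth verifying that AddStream and SetInputMediaType both succeeded first.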

Any advice would be much appreciated. Thanks.

Can Windows Media Player 12 handle a playlist with time codes?

I am very new to Windows Media Player, but I have a good knowledge of programming.

I am wondering whether the latest Windows Media Player can handle a playlist composed of time codes for the start and end of each video clip.

I have seen a demo of a playlist created by putting various video clips into a folder and directing WMP to play it.

I would like to have a single video file and play clips as defined by a starting time code through an ending time code. Can WMP handle this, and how do I use it?

Ideally, I would use a PowerPoint slide show with an embedded WMP control and the defined time codes to play a selected video. Each of my slides would have a different time code to show a different video clip.

I would appreciate any suggestions on this matter!

Thank You,

MFDub sample doesn't work

I am writing a tool that reads video and audio data from one media file and writes them to a new video file. I can read and transfer the video data frame by frame successfully, but I don't know how to do it with audio as well. I found an article about this, http://blogs.msdn.com/b/mf/archive/2010/03/12/mfdub.aspx, published by the Media Foundation team. However, when I tried the "MFDub" sample code shared in the article, I got an error when the line "CHECK_HR( hr = pThis->m_spSinkWriter->BeginWriting());" in mediatranscoder.cpp is invoked: "hr = 0xc00d36b4 : The data specified for the media type is invalid, inconsistent, or not supported by this object." This is MF_E_INVALIDMEDIATYPE. Do you know what's causing this and how to fix it?

Simple Audio Playback

I've read through msdn: http://msdn.microsoft.com/en-us/library/windows/desktop/dd317914%28v=vs.85%29.aspx

Also studied the very helpful sample: http://msdn.microsoft.com/en-us/library/windows/desktop/ff728866%28v=vs.85%29.aspx

(P.S. To be honest, I think the code would be a lot easier to study if the author hadn't separated everything into so many functions. I haven't gone through any other official samples, so is this an official MS coding practice? Anyway...)


Now, the sample does some things I find strange.

1) Do I need to create a new IMFMediaSession whenever I want to change files?

I'd think I could just use the same one for the duration of the application's lifetime, right?


2) Do I ever need to recreate the IMFTopologyNode for output?

If I'm, say, only dealing with audio, I can create it once at startup:

call MFCreateTopologyNode to create the output node,

call MFCreateAudioRendererActivate to create the activate object,

set the IMFActivate on the output node: pNode->SetObject(pRendererActivate),

and then use the same topology node for the duration of the application's lifetime.
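
The three steps above, as a sketch (the function name is made up, and error handling is minimal):

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: build a reusable output node that activates the
// Streaming Audio Renderer (SAR).
HRESULT CreateAudioOutputNode(IMFTopologyNode **ppNode)
{
    IMFTopologyNode *pNode = NULL;
    IMFActivate *pRendererActivate = NULL;

    // 1. Create the output (sink) node.
    HRESULT hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pNode);

    // 2. Create the audio renderer activation object.
    if (SUCCEEDED(hr))
        hr = MFCreateAudioRendererActivate(&pRendererActivate);

    // 3. Attach the activate to the node.
    if (SUCCEEDED(hr))
        hr = pNode->SetObject(pRendererActivate);

    if (SUCCEEDED(hr)) { *ppNode = pNode; pNode->AddRef(); }
    if (pRendererActivate) pRendererActivate->Release();
    if (pNode) pNode->Release();
    return hr;
}
```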


3) Is there any point in listening to the MESessionTopologyStatus event?

Is it possible for the topology of a media session to change by itself? The MF_BasicPlayback sample listens for this event and starts playing whenever there is a change in topology, but if that can't happen without the user requesting a different URI, then one might as well just start the media session in the same function and not bother with this particular event.


4) Switching Files.

Now, whenever I want to play a new file, I have to create a new IMFMediaSource.

Now, if I'm already playing a file, I must release its IMFMediaSource, otherwise I'll have a memory leak, right?

But I should not call ShutDown, as that should only be done when I close the application.

I also guess I shouldn't ever call Stop on the IMFMediaSource, because I can't get any other file to play after I do so (for some reason; any ideas why?).
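
For context, the per-file IMFMediaSource mentioned above is typically obtained through the source resolver; a sketch, with the function name assumed and error handling trimmed:

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: resolve a URL to a new IMFMediaSource (one per file).
HRESULT CreateSourceFromURL(PCWSTR url, IMFMediaSource **ppSource)
{
    IMFSourceResolver *pResolver = NULL;
    IUnknown *pUnk = NULL;
    MF_OBJECT_TYPE type = MF_OBJECT_INVALID;

    HRESULT hr = MFCreateSourceResolver(&pResolver);
    if (SUCCEEDED(hr))
        hr = pResolver->CreateObjectFromURL(url, MF_RESOLUTION_MEDIASOURCE,
                                            NULL, &type, &pUnk);
    if (SUCCEEDED(hr))
        hr = pUnk->QueryInterface(IID_PPV_ARGS(ppSource));

    if (pUnk) pUnk->Release();
    if (pResolver) pResolver->Release();
    return hr;
}
```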

-Thanks

P.S. This is on Win7.







Multiple topology for playing multiple video files

I would like to build multiple topologies to play multiple video files at the same time.

The following DirectShow graph shows what I want to do: play two AVI files in the same graph.

But when I use Media Foundation to build the topology, things are different:

(1) I use the AVFSource media source (from the book Developing Microsoft Media Foundation Applications, Chapter 6) to load AVI_Wildlife.avf
(2) First, I add the media source and sink
(3) Then, I press the play button


The result is that TopoEdit shows an error message and no video appears.

I don't know why. Could you help me out?

Thank you very much




How do I find the registry key on an Asus with windows 8.1?

My Netflix isn't working and I was reading that I need to set my registry key from "1" to "0". The thing is, I don't know how to do that. Please help me!!

IMFPluginControl not working for MFVideoFormat_H264

I have a custom MFT that is registered for MFVideoFormat_H264, but it is not picked up during normal topology resolution. Instead, the topology loader always picks the "Microsoft H264 Video Decoder MFT" ({62CE7E72-4C71-4D20-B15D-452831A87D9D}).

I'm using IMFPluginControl like this:

pluginControl->SetPreferredClsid(MF_Plugin_Type_MFT, L"{34363248-0000-0010-8000-00AA00389B71}", &myId);
pluginControl->SetDisabled(MF_Plugin_Type_MFT, __uuidof(CMSH264DecoderMFT), TRUE);

Setting just the preferred CLSID should be enough (and it works for other formats). But even explicitly disabling the MS decoder has no effect.

Any ideas what's wrong?

Audio/Video Synchronization; Deinterlacing

Hello community!

I have some questions regarding Audio/Video synchronization and video deinterlacing.

My scenario is the following (Windows Store App so the API is limited :) ):

I made a custom scheme handler for RTP streams. I split the (MP2TS) stream's channels and create a stream (IMFMediaStream) for every audio/video channel. So far, (nearly) everything works fine. My issues are deinterlacing of the video (H.264 ES) and audio/video synchronization. I have tried setting the media type's properties to various MFVideoInterlaceMode values, without effect.

Regarding the synchronization: with the TS stream I also get timestamps, but whenever I set the samples' sample time or duration, the video stutters...

Any help is very welcome!

Thanks in advance,

Michael


Low-level monitor configuration API problems

I'm making some tests with the Monitor Configuration API, and due to my needs I have to use the low-level functions.

I've got no problems when the monitor is correctly plugged in and set up. However, if the monitor is in stand-by or another input is selected, then whenever I try to call any of the low-level or high-level methods I get an error, and my monitor completely stops responding to any of its buttons; I need to cut its power source and reconnect it. I tested on two completely different hardware configurations, and the preceding call to GetNumberOfPhysicalMonitorsFromHMONITOR correctly returns all the values.

It seems the number of people using this API can be counted on the fingers of one hand, and none of them talk about this case.

My code is written in VB.NET, and the API is declared as:

    Private Class NativeMethods

        <StructLayout(LayoutKind.Sequential, CharSet:=CharSet.Auto)>
        Public Structure PHYSICAL_MONITOR
            Public hPhysicalMonitor As IntPtr

            <MarshalAs(UnmanagedType.ByValTStr, SizeConst:=128)>
            Public szPhysicalMonitorDescription As String
        End Structure

        Public Enum LPMC_VCP_CODE_TYPE
            MC_MOMENTARY
            MC_SET_PARAMETER
        End Enum

        <DllImport("user32.dll", EntryPoint:="MonitorFromWindow")> _
        Public Shared Function MonitorFromWindow(ByVal hwnd As System.IntPtr, ByVal dwFlags As UInteger) As System.IntPtr
        End Function

        <DllImport("dxva2.dll", EntryPoint:="GetNumberOfPhysicalMonitorsFromHMONITOR", SetLastError:=True)>
        Public Shared Function GetNumberOfPhysicalMonitorsFromHMONITOR(hMonitor As IntPtr, ByRef pdwNumberOfPhysicalMonitors As UInteger) As Boolean
        End Function

        <DllImport("dxva2.dll", EntryPoint:="GetPhysicalMonitorsFromHMONITOR", SetLastError:=True)>
        Public Shared Function GetPhysicalMonitorsFromHMONITOR(hMonitor As IntPtr, dwPhysicalMonitorArraySize As UInteger, <Out()> pPhysicalMonitorArray As PHYSICAL_MONITOR()) As Boolean
        End Function

        <DllImport("dxva2.dll", EntryPoint:="DestroyPhysicalMonitors", SetLastError:=True)>
        Public Shared Function DestroyPhysicalMonitors(dwPhysicalMonitorArraySize As UInteger, pPhysicalMonitorArray As PHYSICAL_MONITOR()) As Boolean
        End Function

        <DllImport("dxva2.dll", EntryPoint:="GetCapabilitiesStringLength", SetLastError:=True)>
        Public Shared Function GetCapabilitiesStringLength(hMonitor As IntPtr, <Out()> ByRef pdwCapabilitiesStringLengthInCharacters As UInteger) As Boolean
        End Function

        <DllImport("dxva2.dll", EntryPoint:="CapabilitiesRequestAndCapabilitiesReply", SetLastError:=True)>
        Public Shared Function CapabilitiesRequestAndCapabilitiesReply(hMonitor As IntPtr, <Out(), MarshalAs(UnmanagedType.LPStr)> pszASCIICapabilitiesString As System.Text.StringBuilder, dwCapabilitiesStringLengthInCharacters As UInteger) As Boolean
        End Function

        <DllImport("dxva2.dll", EntryPoint:="GetVCPFeatureAndVCPFeatureReply", SetLastError:=True)>
        Public Shared Function GetVCPFeatureAndVCPFeatureReply(hMonitor As IntPtr, bVCPCode As Byte, <Out()> ByRef pvct As LPMC_VCP_CODE_TYPE, <Out()> ByRef pdwCurrentValue As UInteger, <Out()> ByRef pdwMaximumValue As UInteger) As Boolean
        End Function

        <DllImport("dxva2.dll", EntryPoint:="SetVCPFeature", SetLastError:=True)>
        Public Shared Function SetVCPFeature(hMonitor As IntPtr, bVCPCode As Byte, dwNewValue As UInteger) As Boolean
        End Function

    End Class

Anyway, I repeat: it works as expected when the monitor is not in stand-by and its input is not switched to a different source. I could somewhat expect the API not to work otherwise (although I don't see the standard saying so), but not for it to completely freeze the monitor's control panel.

Has anyone else run into this problem or have some extra info to share?

MPEG-4 Media Sink Produces Incorrect Length of File

I have a transcoding topology to capture from a FireWire-connected VCR to an H.264 MPEG-4 file. The basic flow is as follows: DV Video Capture Source -> DV Decoder -> H.264 Encoder -> MPEG-4 Sink. (I create the MPEG-4 media sink using MFCreateMPEG4MediaSink.) After I build the topology and connect its nodes to each other, I load it using the topology loader and queue it to a media session. I start the media session, the transcoding starts, and the output file grows accordingly. After about 30 seconds I stop the media session, finalize the MPEG-4 media sink using the IMFFinalizableMediaSink interface, then shut it down using the Shutdown method. My resulting file plays in WMP, VLC, etc., but its reported duration is wrong (I assume the moov atom contains the wrong length somehow). Instead of something close to 30 seconds, the players report the file is about 27 minutes long.

What am I doing wrong here? Other than that, the file is OK. I must mention that I do not set the presentation clock on the media sink; could this be the problem? If so, where do I obtain the presentation clock?
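
Should a clock turn out to be needed, a system-time presentation clock can be created and handed to the sink like this (a sketch only; when a media session drives the topology it normally installs the presentation clock itself, so setting one manually is an assumption here):

```cpp
#include <mfapi.h>
#include <mfidl.h>

// Sketch: give a media sink a presentation clock driven by system time.
HRESULT SetSystemClockOnSink(IMFMediaSink *pSink,
                             IMFPresentationClock **ppClock)
{
    IMFPresentationClock *pClock = NULL;
    IMFPresentationTimeSource *pTimeSource = NULL;

    HRESULT hr = MFCreatePresentationClock(&pClock);
    if (SUCCEEDED(hr))
        hr = MFCreateSystemTimeSource(&pTimeSource);
    if (SUCCEEDED(hr))
        hr = pClock->SetTimeSource(pTimeSource);   // clock ticks on system time
    if (SUCCEEDED(hr))
        hr = pSink->SetPresentationClock(pClock);  // sink observes this clock

    if (SUCCEEDED(hr)) { *ppClock = pClock; pClock->AddRef(); }
    if (pTimeSource) pTimeSource->Release();
    if (pClock) pClock->Release();
    return hr;
}
```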

Thanks in advance.

Video processing with alpha channel

Hi everyone,

This is one of my first questions here. I'm an experienced developer but new to the Windows SDK and DirectX, so please excuse me if the question mixes unrelated concepts; I'm still trying to figure out how Windows works with video.

First of all I will describe my scenario and what I'm trying to do.

I want to develop a C++ application with the Windows SDK v7.1 and the DirectX SDK (June 2010) which should be capable of handling two input videos, each with an alpha channel, and then blending them together into one output video using the alpha-channel information. The videos are uncompressed .avi containers (no encoding applied).

As far as I know (thanks to the MSDN forums and docs), this can be done in several ways:

1. With a DirectShow filter: I found this solution a bit hard and too much for my requirements.

2. Using DirectShow Editing Services: I've tried to write a very simple app but always get an error related to qedit.h, and as Microsoft explains in the following link (http://msdn.microsoft.com/en-us/library/windows/desktop/dd375454(v=vs.85).aspx), it is no longer included in newer SDKs, so I think it is not a good solution because it is deprecated.

Then I came across Media Foundation, which replaces DES, but sincerely I'm a bit lost and have not been able to find good, compiling examples to test.

So, could anyone clarify whether I can achieve what I need with Media Foundation? If yes, how should I begin? If not, is there another solution?

Any help will be appreciated!

Note: I'm using Visual C++ 2010 Express, if it matters.

Thanks,

Adrià.


Using MP3 decoder directly, without session

Hello,

I wish to use the MP3 decoder object on Windows 7: feed it some MP3 data and read back PCM samples. I'm having a hard time making the API work. I'm wondering if anyone has managed something like this, or knows of an example where it is used this way. I've looked into many open source projects but found none using it...

I've read from http://social.msdn.microsoft.com/Forums/en-US/a7dd62e4-6d65-433c-a715-43b4a39230bd/mp3-decoder-filter-missing-in-vista?forum=windowsdirectshowdevelopment that the decoder might be deliberately crippled, but if so, why put it in the documentation? Maybe I have to use it with the DirectShow APIs?

So here is my problem. I successfully instantiate the CLSID_CMP3DecMediaObject object with CoCreateInstance, and I can use GetStreamCount, which gives me 1 input and 0 outputs:

hr = CoCreateInstance(CLSID_CMP3DecMediaObject, NULL, CLSCTX_INPROC_SERVER, IID_IMFTransform, (void **) &mp3CoObject);
hr = mp3CoObject->GetStreamCount(&cInputStreams, &cOutputStreams);

Then, I cannot configure it further.

When I call SetInputType on it with an MP3 format, I get an MF_E_INVALIDMEDIATYPE error. Maybe I'm giving it a bad media type (see the next lines), but I don't really know what to give...

MFCreateMediaType(&inputMediaType);
hr = inputMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
hr = inputMediaType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_MP3);
hr = inputMediaType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE);

Then, ignoring this error, I get MF_E_TRANSFORM_TYPE_NOT_SET on any SetOutputType or GetOutputAvailableType call.

I get the same kind of problem using the DMO APIs, by the way...

Any ideas, please?
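
For comparison, a fuller MP3 input type might look like the sketch below; whether the decoder requires these extra attributes is an assumption, and the 44100/16000 values are placeholders for one particular 128 kbps stream:

```cpp
#include <mfapi.h>

// Sketch: an MP3 input media type carrying the usual PCM-style
// format attributes in addition to major type and subtype.
HRESULT CreateMp3InputType(IMFMediaType **ppType)
{
    IMFMediaType *pType = NULL;
    HRESULT hr = MFCreateMediaType(&pType);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    if (SUCCEEDED(hr)) hr = pType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_MP3);
    if (SUCCEEDED(hr)) hr = pType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    if (SUCCEEDED(hr)) hr = pType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    if (SUCCEEDED(hr)) hr = pType->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 16000); // 128 kbps
    if (SUCCEEDED(hr)) hr = pType->SetUINT32(MF_MT_AUDIO_BLOCK_ALIGNMENT, 1);

    if (SUCCEEDED(hr)) { *ppType = pType; pType->AddRef(); }
    if (pType) pType->Release();
    return hr;
}
```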

Fred


Decoding AAC with MFT gives only every second audio frame

I'm using the Windows built-in AAC MFT for decoding AAC audio, using the methods described in "AAC Decoder" and basic IMFTransform usage. The method works fine for other codecs, like Dolby.

However, for AAC audio I get the error MF_E_TRANSFORM_NEED_MORE_INPUT for every even frame I send. So I send another frame, and those frames return OK. But on output the filter delivers only every second frame; the other is lost. I tried a lot of additional sample parameters and tried to get the other frame out, but with no success. I could also not find any hint on the Internet.

My shortest code sample (some parameters are hard-wired: 2 channels, 48 kHz):

#	include <windows.h>
#	include <Mfapi.h>
#	include <Mfidl.h>
#	include <Mferror.h>
#	include <Mfreadwrite.h>
#	include <wmcodecdsp.h>

int main ()
{
	HRESULT hrError ;
	ULONG dwFlags ;
	IMFTransform * pDecoder = 0 ;
	IMFMediaType * pMediaTypeIn = 0 ;
	IMFMediaType * pOutType = 0 ;
	IMFSourceReader * pReader = 0 ;
	IMFMediaType * partialMediaType = 0 ;
	unsigned int unChannel, unSampling ;


	// init MF decoder, using Microsoft hard wired GUID

	hrError = CoInitializeEx (0, COINIT_APARTMENTTHREADED) ;
	hrError = MFStartup (MF_VERSION, MFSTARTUP_FULL) ;
	hrError = CoCreateInstance (CLSID_CMSAACDecMFT, 0, CLSCTX_INPROC_SERVER, IID_IMFTransform, (void **) & pDecoder) ;


	// setup decoder input type, hard wired for given sample audio clip

	unChannel = 2 ;
	unSampling = 48000 ;

	hrError = MFCreateMediaType (& pMediaTypeIn) ;
	hrError = pMediaTypeIn->SetGUID (MF_MT_MAJOR_TYPE, MFMediaType_Audio) ;
	hrError = pMediaTypeIn->SetGUID (MF_MT_SUBTYPE, MFAudioFormat_AAC) ;

	hrError = pMediaTypeIn->SetUINT32 (MF_MT_AAC_AUDIO_PROFILE_LEVEL_INDICATION, 0x2A /*DP4MEDIA_MP4A_AUDIO_PLI_AAC_L4*/) ;
	hrError = pMediaTypeIn->SetUINT32 (MF_MT_AAC_PAYLOAD_TYPE, 1) ;		// payload 1 = ADTS header
	hrError = pMediaTypeIn->SetUINT32 (MF_MT_AUDIO_BITS_PER_SAMPLE, 16) ;
	hrError = pMediaTypeIn->SetUINT32 (MF_MT_AUDIO_CHANNEL_MASK, (unChannel == 2 ? 0x03 : 0x3F)) ;
	hrError = pMediaTypeIn->SetUINT32 (MF_MT_AUDIO_NUM_CHANNELS, (UINT32) unChannel) ;
	hrError = pMediaTypeIn->SetUINT32 (MF_MT_AUDIO_SAMPLES_PER_SECOND, (UINT32) unSampling) ;

	// prepare additional user data (according to MS documentation for AAC decoder)

	byte arrUser [] = {0x01, 0x00, 0xFE, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11, 0x90} ;	// hard wired for 2 channels, 48 kHz
	pMediaTypeIn->SetBlob (MF_MT_USER_DATA, arrUser, 14) ;

	hrError = pDecoder->SetInputType (0, pMediaTypeIn, 0) ;
	pMediaTypeIn->Release () ;


	// set out type

	MFCreateMediaType (& pOutType) ;

	pOutType->SetGUID (MF_MT_MAJOR_TYPE, MFMediaType_Audio) ;
	pOutType->SetGUID (MF_MT_SUBTYPE, MFAudioFormat_PCM) ;

	pOutType->SetUINT32 (MF_MT_AUDIO_BITS_PER_SAMPLE, 16) ;
	pOutType->SetUINT32 (MF_MT_AUDIO_SAMPLES_PER_SECOND, 48000) ;
	pOutType->SetUINT32 (MF_MT_AUDIO_NUM_CHANNELS, 2) ;

	hrError = pDecoder->SetOutputType (0, pOutType, 0) ;
	pOutType->Release () ;


	// get source reader - will read AAC samples

	hrError = MFCreateSourceReaderFromURL (L"Sand.aac", 0, & pReader) ;
	hrError = MFCreateMediaType (& partialMediaType) ;

	partialMediaType->SetGUID (MF_MT_MAJOR_TYPE, MFMediaType_Audio) ;
	partialMediaType->SetGUID (MF_MT_SUBTYPE, MFAudioFormat_AAC) ;		// create AAC from AAC -> frame reader

	pReader->SetCurrentMediaType (MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, partialMediaType) ;
	partialMediaType->Release () ;


	// write wave file header
	// ...

	// get input buffer

	hrError = pDecoder->ProcessMessage (MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, 0) ;
	hrError = pDecoder->ProcessMessage (MFT_MESSAGE_NOTIFY_START_OF_STREAM, 0) ;

	do
	{
		// read sample

		IMFSample * pSampleIn, * pSampleOut ;
		LONGLONG timestamp;
		DWORD actualStreamIndex;
		IMFMediaBuffer * pBufferOut ;
		IMFMediaBuffer * pBuffer ;
		MFT_OUTPUT_STREAM_INFO xOutputInfo ;
		MFT_OUTPUT_DATA_BUFFER arrOutput [1] ;
		DWORD dwStatus ;
		DWORD dwLength ;
		BYTE * pAudioData ;
		DWORD cbBuffer;
		DWORD pcbMaxLength;

		pReader->ReadSample (MF_SOURCE_READER_FIRST_AUDIO_STREAM, 0, & actualStreamIndex, & dwFlags, & timestamp, & pSampleIn);
		if (dwFlags != 0)
			break ;


		// process input buffer

		hrError = pDecoder->ProcessInput (0, pSampleIn, 0) ;
		pSampleIn->Release () ;


		// get LPCM output - drain ALL pending output samples. One ProcessInput
		// can yield more than one output, and MF_E_TRANSFORM_NEED_MORE_INPUT
		// simply means "feed the next input"; it ends the drain loop and is
		// not a per-frame failure.

		for (;;)
		{
			pSampleOut = 0 ;
			pBufferOut = 0 ;

			hrError = pDecoder->GetOutputStreamInfo (0, & xOutputInfo) ;
			hrError = MFCreateSample (& pSampleOut) ;
			hrError = MFCreateMemoryBuffer (xOutputInfo.cbSize, & pBufferOut) ;
			hrError = pBufferOut->SetCurrentLength (0) ;
			hrError = pSampleOut->AddBuffer (pBufferOut) ;

			arrOutput [0].dwStreamID = 0 ;
			arrOutput [0].dwStatus = 0 ;
			arrOutput [0].pEvents = 0 ;
			arrOutput [0].pSample = pSampleOut ;

			dwStatus = 0 ;

			hrError = pDecoder->ProcessOutput (0, 1, arrOutput, & dwStatus) ;

			if (hrError == MF_E_TRANSFORM_NEED_MORE_INPUT)
			{
				pBufferOut->Release () ;
				pSampleOut->Release () ;
				break ;		// no more decoded data - read the next input sample
			}

			hrError = pBufferOut->GetCurrentLength (& dwLength) ;

			pSampleOut->ConvertToContiguousBuffer (& pBuffer);
			pBuffer->Lock (& pAudioData, & pcbMaxLength, & cbBuffer);
			// write wave file

			pBuffer->Unlock ();

			pBuffer->Release () ;
			pBufferOut->Release () ;
			pSampleOut->Release () ;
		}
	}
	while (dwFlags == 0) ;
}

We are waiting for a new MIDI API...

Hello.

Developers and musicians need a new MIDI API.

Here is the legacy API for MIDI programming: MIDI

The structure naming is awful; it is neither object-oriented nor COM-based.

Microsoft can do a better API.

We need a new MIDI API for Windows 7: an API compliant with the new Windows driver architecture, even if MIDI does not strictly need it (indeed, it does not require high performance). It seems some problems are arising because of this: MIDI problem

..."Thanks for the report; we're investigating the issue."...

Do not just investigate; please make a new API we could use under Media Foundation. You can't leave a legacy API alone, thinking it will work just fine with new Windows OSes. There comes a time to make it compliant with the new Windows OS and the new programming style.

I will work with the legacy API, but I would like to use a Media Foundation-style API, with COM and OOP. I don't want to work with an unstable API on future OSes.

And please answer the question: what is the recommended API for MIDI development on Vista? Oops, now there is Windows 8.1...

Windows has always been a multimedia OS for everyone... musicians included.




SetItemInfo for a playlist directly...not through the embedded AxWindowsMediaPlayer1

I have a program that creates and accesses Windows Media Playlists (for audio / *.wma files).

Recently, I have tried to set attributes other than Title and Author. When I try using setItemInfo("<attribute>", "<attribute value>"), the <attribute> does not get saved back to the permanent *.wpl file. In other words, if I use, for example:

wmpMyPlaylistCollection = AxWindowsMediaPlayer1.playlistCollection
wmpMyPLArray = wmpMyPlaylistCollection.getByName(<SomePlayListName>)
wmpMyPlaylist = wmpMyPLArray.Item(0) ' (assumed step: take the playlist out of the returned array)

wmpMyPlaylist.setItemInfo("UserCustom1", "ThisIsATest")

and then try and access it:

Debug.Print("UserCustom1: " & wmpMyPlaylist.getItemInfo("UserCustom1"))

This is my output:

UserCustom1: ThisIsATest

And if I use the code:

For IX = 0 To wmpMyPlaylist.attributeCount - 1
  Debug.Print(IX & "). " & wmpMyPlaylist.attributeName(IX).ToString)
Next

one of the Debug.Print statements will produce: UserCustom1

However, if I go back and re-select the wmpMyPlaylist from the library via the following code:

wmpMyPlaylistCollection = AxWindowsMediaPlayer1.playlistCollection
wmpMyPLArray = wmpMyPlaylistCollection.getByName(<SomePlayListName>)
wmpMyPlaylist = wmpMyPLArray.Item(0) ' (assumed step: take the playlist out of the returned array)
Then:
Debug.Print("UserCustom1: " & wmpMyPlaylist.getItemInfo("UserCustom1"))

will produce:
UserCustom1: <and a blank here>

and the for next loop:

For IX = 0 To wmpMyPlaylist.attributeCount - 1
  Debug.Print(IX & "). " & wmpMyPlaylist.attributeName(IX).ToString)
Next

will not have the entry: UserCustom1

By doing some research, I found the MSDN Library entry for:

IWMPMedia::setItemInfo method

which states in the "Remarks" section:

If you embed the Windows Media Player control in your application, file attributes that you change will not be written to the digital media file until the user runs Windows Media Player.

So... how can I call setItemInfo against the playlist without running it through the embedded AxWindowsMediaPlayer1?

Anybody know how to do this?

Thanks for your time in advance.



Paul D. Goldstein Forceware Systems, Inc.

Is PVP-OPM workable with any hardware video card? 0x80070017 is returned when the application calls OPMGetVideoOutputsFromHMONITOR

0x80070017 (the HRESULT for ERROR_CRC, a data error) is returned when the application calls OPMGetVideoOutputsFromHMONITOR.

The hardware is an HP DV6-6100.

The OS name is Microsoft Windows 7 Home Premium.

The system type is x64-based PC.

Windows 7 Vs Windows 8 merging mp4 files

I am really hoping somebody finally answers my question in this forum!

Here is the issue.

I am trying to merge mp4 files into one.

It works fine in Windows 7 but not in Windows 8.

Let me explain.

I create two mp4 files. They have exactly the same parameters as far as audio and video are concerned. So I simply employ MFCopy-style code to read samples from the two files and write them to a new mp4 file.

The two input mp4 files are encoded using the standard Media Foundation sink writer mechanism.

This works great in Windows 7.

However, in Windows 8, the merged file is blank after the first input video is done.

So, in Windows 8, the merged file plays fine up to the end of the first video. Then it is black. Always. It is consistent.

I then looked at the metadata using MediaInfo. The only difference between the files was "ReFrames" (reference frames).

On Win 7, mp4 files have ReFrames -> 1

On Win 8, mp4 files have ReFrames -> 2

To add to the twist, if I take the files encoded on Win 8 (the ones which have ReFrames of 2), put them on a Win 7 system and merge them there, the result is _fine_.

Can anybody please help me? I have a sample project and input files I can share. It really is a consistent bug.
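
For reference, the MFCopy-style append loop described above can be sketched as follows; the function name is made up, stream setup (AddStream/SetInputMediaType) is assumed to have been done already, and only one stream is shown:

```cpp
#include <mfapi.h>
#include <mfreadwrite.h>

// Sketch: copy every sample from a source reader to a sink writer,
// shifting timestamps by a running offset so the second file continues
// where the first ended.
HRESULT AppendFile(IMFSourceReader *pReader, IMFSinkWriter *pWriter,
                   DWORD dwOutStream, LONGLONG hnsOffset)
{
    for (;;)
    {
        DWORD dwFlags = 0;
        LONGLONG hnsTime = 0;
        IMFSample *pSample = NULL;

        HRESULT hr = pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                         0, NULL, &dwFlags, &hnsTime, &pSample);
        if (FAILED(hr)) return hr;
        if (dwFlags & MF_SOURCE_READERF_ENDOFSTREAM) return S_OK;

        if (pSample)
        {
            pSample->SetSampleTime(hnsTime + hnsOffset);  // rebase timestamp
            hr = pWriter->WriteSample(dwOutStream, pSample);
            pSample->Release();
            if (FAILED(hr)) return hr;
        }
    }
}
```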
